Recent years have seen a growth in user-centric applications that require effective knowledge transfer across tasks in the low-data regime. An example is personalization, where a pretrained system is adapted by learning on small amounts of labeled data belonging to a specific user. This setting requires high accuracy under low computational complexity, and therefore the Pareto frontier of accuracy versus adaptation cost plays a crucial role. In this paper we push this Pareto frontier in the few-shot image classification setting with two key contributions: (i) a new adaptive block called Contextual Squeeze-and-Excitation (CaSE) that adjusts a pretrained neural network on a new task to significantly improve performance with a single forward pass over the user data (context), and (ii) a hybrid training protocol based on coordinate descent, called UpperCaSE, that exploits meta-trained CaSE blocks and fine-tuning routines for efficient adaptation. UpperCaSE achieves a new state-of-the-art accuracy relative to meta-learners on the 26 datasets of VTAB+MD and on the challenging real-world personalization benchmark ORBIT, narrowing the gap with leading fine-tuning methods at orders of magnitude lower adaptation cost.
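As a concrete illustration, here is a minimal PyTorch sketch of a context-conditioned squeeze-and-excitation adapter in the spirit of CaSE; the class name, layer sizes, and pooling choices are our own illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class ContextualSE(nn.Module):
    """Squeeze-and-excitation-style adapter whose channel scaling is
    computed once from a context set (hypothetical CaSE-like sketch)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        self.gamma = None  # per-task channel scaling, set by adapt()

    def adapt(self, context_feats: torch.Tensor) -> None:
        # "Squeeze": pool the context features over batch and spatial dims,
        # then "excite" to obtain one scaling vector for the whole task.
        pooled = context_feats.mean(dim=(0, 2, 3))  # (C,)
        self.gamma = self.mlp(pooled)               # (C,)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.gamma is None:  # no context seen yet: act as identity
            return x
        return x * self.gamma.view(1, -1, 1, 1)

# One forward pass over the user's context data adapts the block; query
# images are then classified with the modulated backbone.
block = ContextualSE(channels=64)
context = torch.randn(25, 64, 8, 8)  # features of 25 labeled user images
block.adapt(context)
query = torch.randn(5, 64, 8, 8)
out = block(query)  # (5, 64, 8, 8), channel-rescaled
```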
Modern deep learning systems are increasingly deployed in situations such as personalization and federated learning, where it is necessary to support i) learning on small amounts of data and ii) communication-efficient distributed training protocols. In this work we develop FiLM Transfer (FiT), which fulfills these requirements in the image classification setting. FiT uses an automatically configured Naive Bayes classifier on top of a fixed backbone that has been pretrained on a large image dataset. Parameter-efficient FiLM layers are used to modulate the backbone, shaping the representation for the downstream task. The network is trained via an episodic fine-tuning protocol. The approach is parameter-efficient, which is key for enabling few-shot learning, inexpensive model updates for personalization, and communication-efficient federated learning. We experiment with FiT on a wide range of downstream datasets and show that it achieves better classification accuracy than the state-of-the-art Big Transfer (BiT) algorithm in the low-shot regime and on the challenging VTAB-1k benchmark, using fewer than 1% of the updateable parameters. Finally, we demonstrate the parameter efficiency of FiT in distributed low-shot applications, including model personalization and federated learning, where model update size is an important performance metric.
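To make the parameter-efficiency argument concrete, here is a minimal PyTorch sketch of a FiLM layer, the per-channel scale-and-shift modulation the abstract refers to; the sizes and usage are illustrative.

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Feature-wise Linear Modulation: a per-channel scale and shift.
    If only these parameters are updated per task, a model update is
    tiny compared with fine-tuning the whole backbone."""
    def __init__(self, channels: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(channels))
        self.shift = nn.Parameter(torch.zeros(channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        return x * self.scale.view(1, -1, 1, 1) + self.shift.view(1, -1, 1, 1)

# Freezing the backbone and training only FiLM parameters keeps the
# per-task update far below 1% of the total parameter count.
backbone_out = torch.randn(4, 256, 7, 7)
film = FiLM(256)
modulated = film(backbone_out)
trainable = sum(p.numel() for p in film.parameters())  # 512 parameters
```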
Artificial Intelligence (AI) and Machine Learning (ML) are weaving their way into the fabric of society, where they are playing a crucial role in numerous facets of our lives. As we witness the increased deployment of AI and ML in various types of devices, we benefit from their implementation as energy-efficient algorithms for low-powered devices. In this paper, we investigate a scale and medium far smaller than conventional devices as we move towards molecular systems that can be utilized to perform machine learning functions, i.e., Molecular Machine Learning (MML). Fundamental to the operation of MML is the transport, processing, and interpretation of information propagated by molecules through chemical reactions. We begin by reviewing the current approaches that have been developed for MML, before moving towards potential new directions that rely on gene regulatory networks inside biological organisms, as well as their population interactions, to create neural networks. We then investigate mechanisms for training machine learning structures in biological cells based on calcium signaling and demonstrate their application to building an Analog to Digital Converter (ADC). Lastly, we look at potential future directions as well as the open challenges this area presents.
Building a quantum analog of classical deep neural networks represents a fundamental challenge in quantum computing. A key issue is how to address the inherent non-linearity of classical deep learning, which is problematic in the quantum domain because the composition of an arbitrary number of quantum gates, a series of sequential unitary transformations, is intrinsically linear. This problem has been variously approached in the literature, principally via the introduction of measurements between layers of unitary transformations. In this paper, we introduce the Quantum Path Kernel, a formulation of quantum machine learning capable of replicating those aspects of deep machine learning typically associated with superior generalization performance in the classical domain, specifically, hierarchical feature learning. Our approach generalizes the notion of the Quantum Neural Tangent Kernel, which has been used to study the dynamics of classical and quantum machine learning models. The Quantum Path Kernel exploits the parameter trajectory, i.e. the curve delineated by model parameters as they evolve during training, enabling the representation of differential layer-wise convergence behaviors, or the formation of hierarchical parametric dependencies, in terms of their manifestation in the gradient space of the predictor function. We evaluate our approach on variants of the classification of Gaussian XOR mixtures - an artificial but emblematic problem that intrinsically requires multilevel learning in order to achieve optimal class separation.
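A rough classical sketch of the core computation, assuming the path kernel is the tangent kernel averaged over parameter checkpoints saved along the training trajectory; the toy differentiable model below stands in for a variational quantum circuit, and all names are ours.

```python
import torch

def model(theta, x):
    # Tiny stand-in predictor; in the quantum setting, the expectation
    # value of a parametrized circuit would play this role.
    return torch.tanh(theta[0] * x[0] + theta[1] * x[1]) * theta[2]

def grad_f(theta, x):
    theta = theta.clone().requires_grad_(True)
    model(theta, x).backward()
    return theta.grad

def path_kernel(checkpoints, x1, x2):
    # Average the tangent kernel <grad f(x1), grad f(x2)> along the
    # parameter trajectory traced out during training.
    vals = [torch.dot(grad_f(t, x1), grad_f(t, x2)) for t in checkpoints]
    return torch.stack(vals).mean()

# Checkpoints would come from saving parameters during training; here we
# fabricate a short trajectory purely for illustration.
trajectory = [torch.randn(3) for _ in range(5)]
x_a, x_b = torch.tensor([1.0, -1.0]), torch.tensor([0.5, 0.5])
k = path_kernel(trajectory, x_a, x_b)
```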
Few-shot learning (FSL) is a central problem in meta-learning, where learners must efficiently learn from few labeled examples. Within FSL, feature pre-training has recently become an increasingly popular strategy to significantly improve generalization performance. However, the contribution of pre-training is often overlooked and understudied, with limited theoretical understanding of its impact on meta-learning performance. Further, pre-training requires a consistent set of global labels shared across training tasks, which may be unavailable in practice. In this work, we address the above issues by first showing the connection between pre-training and meta-learning. We discuss why pre-training yields more robust meta-representations and connect the theoretical analysis to existing works and empirical results. Second, we introduce Meta Label Learning (MeLa), a novel meta-learning algorithm that learns task relations by inferring global labels across tasks. This allows us to exploit pre-training for FSL even when global labels are unavailable or ill-defined. Lastly, we introduce an augmented pre-training procedure that further improves the learned meta-representation. Empirically, MeLa outperforms existing methods across a diverse range of benchmarks, in particular under a more challenging setting where the number of training tasks is limited and labels are task-specific. We also provide an extensive ablation study to highlight its key properties.
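One plausible mechanism for inferring global labels (not necessarily MeLa's exact procedure) is to match task-local class prototypes to a shared set of global prototypes by minimum-cost assignment; a small sketch, with all names and shapes illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_global_labels(task_prototypes, global_prototypes):
    """Match each task-local class prototype to a global label by
    minimum-cost assignment on pairwise Euclidean distances."""
    cost = np.linalg.norm(
        task_prototypes[:, None, :] - global_prototypes[None, :, :], axis=-1
    )
    local_idx, global_idx = linear_sum_assignment(cost)
    return dict(zip(local_idx.tolist(), global_idx.tolist()))

# Two tasks whose local classes secretly correspond to the same global
# concepts: the inferred mapping is consistent across tasks.
globals_ = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
task_a = np.array([[0.1, -0.1], [5.2, 4.9]])
task_b = np.array([[4.8, 5.1], [9.9, 0.2]])
print(assign_global_labels(task_a, globals_))  # {0: 0, 1: 1}
print(assign_global_labels(task_b, globals_))  # {0: 1, 1: 2}
```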
Tree-based machine learning algorithms provide the most precise assessment of the feasibility for a country to export a target product given its export basket. However, the high number of parameters involved prevents a straightforward interpretation of the results and, in turn, limits the explainability of the resulting policy indications. In this paper, we propose a procedure to statistically validate the importance of the products used in the feasibility assessment. In this way, we are able to identify which products, called explainers, significantly increase the probability of exporting a target product in the near future. The explainers naturally identify a low-dimensional representation, the Feature Importance Product Space, that enhances the interpretability of the recommendations and provides out-of-sample forecasts of the export baskets of countries. Interestingly, we detect a positive correlation between the complexity of a product and the complexity of its explainers.
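A minimal sketch of what such a statistical validation could look like, assuming a permutation test against label-shuffled null models; the estimator, threshold, and data shapes are illustrative, not the paper's exact procedure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def validated_explainers(X, y, n_null=100, alpha=0.05, seed=0):
    """Keep only features whose tree-based importance exceeds what
    label-shuffled null models produce (simple permutation validation)."""
    rng = np.random.default_rng(seed)
    observed = RandomForestClassifier(random_state=seed).fit(X, y).feature_importances_
    null = np.empty((n_null, X.shape[1]))
    for i in range(n_null):
        y_perm = rng.permutation(y)
        null[i] = RandomForestClassifier(random_state=seed).fit(X, y_perm).feature_importances_
    # p-value: fraction of null importances at least as large as observed
    pvals = (null >= observed).mean(axis=0)
    return np.where(pvals < alpha)[0]

# X: countries x products export matrix; y: whether the target product is
# exported a few years later (illustrative shapes and signal).
X = np.random.default_rng(1).random((200, 30))
y = (X[:, 3] + 0.5 * X[:, 7] > 1.0).astype(int)
print(validated_explainers(X, y, n_null=20))  # indices of significant explainers
```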
We develop Bayesian neural networks (BNNs) that can model generic nonlinearities and time variation for (possibly large sets of) macroeconomic and financial variables. From a methodological point of view, we allow for a general specification of networks that can be applied to either dense or sparse datasets, and that combines various activation functions, a possibly very large number of neurons, and stochastic volatility (SV) for the error term. From a computational point of view, we develop fast and efficient estimation algorithms for the general BNNs we introduce. From an empirical point of view, we show, both with simulated data and with a set of common macro and financial applications, that our BNNs can be of practical use, particularly so for observations in the tails of the cross-sectional or time series distributions of the target variables.
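A generative sketch of the kind of specification described here: a neural conditional mean plus an AR(1) stochastic-volatility error term. All hyperparameters are illustrative, and estimation is not shown; the paper's networks mix several activation functions and far more neurons.

```python
import numpy as np

rng = np.random.default_rng(0)

def nn_mean(x, W1, b1, W2, b2):
    # One hidden layer with a nonlinear activation (illustrative).
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Simulate T observations: y_t = f(x_t) + exp(h_t / 2) * eps_t,
# with log-volatility h_t following an AR(1) process.
T, k, hidden = 300, 4, 8
W1, b1 = rng.normal(size=(k, hidden)), rng.normal(size=hidden)
W2, b2 = rng.normal(size=hidden), 0.0
mu, phi, sigma_eta = -1.0, 0.95, 0.2

X = rng.normal(size=(T, k))
h = np.empty(T)
h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal()
y = nn_mean(X, W1, b1, W2, b2) + np.exp(h / 2) * rng.normal(size=T)
```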
The representation of data is of paramount importance for machine learning methods. Kernel methods are used to enrich the feature representation, enabling better generalization. Quantum kernels efficiently implement complex transformations that encode classical data in the Hilbert space of a quantum system, potentially even leading to exponential speedups. However, prior knowledge of the data is needed to choose an appropriate parametric quantum circuit that can serve as the quantum embedding. We propose an algorithm that automatically selects the best quantum embedding through a combinatorial optimization procedure that modifies the structure of the circuit, changing the generators of the gates, their angles (which depend on the data points), and the qubits on which the various gates act. Since combinatorial optimization is computationally expensive, we introduce a criterion based on the exponential concentration of kernel matrix coefficients around the mean to immediately discard an arbitrarily large portion of candidate solutions that are expected to perform poorly. In contrast to gradient-based optimization (e.g., trainable quantum kernels), our approach is not affected by barren plateaus by construction. We use both artificial and real-world datasets to demonstrate the improved performance of our approach with respect to randomly generated PQCs. We also compare the effect of different optimization algorithms, including greedy local search, simulated annealing, and genetic algorithms, showing that the choice of algorithm largely affects the result.
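A toy illustration of the screening step, assuming candidates are discarded when their off-diagonal kernel entries concentrate (have vanishing variance) around the mean; the statistic and threshold are our simplification of the paper's criterion.

```python
import numpy as np

def concentrated(K, tol=1e-3):
    """Flag a candidate Gram matrix whose off-diagonal entries concentrate
    around their mean; such embeddings are discarded without the expensive
    full evaluation."""
    off_diag = K[~np.eye(K.shape[0], dtype=bool)]
    return off_diag.var() < tol

# Two toy Gram matrices: one nearly constant off the diagonal (discard),
# one with spread-out similarities (keep as a candidate).
rng = np.random.default_rng(0)
K_flat = np.full((8, 8), 0.5) + np.eye(8) * 0.5
A = rng.random((8, 3))
K_varied = A @ A.T
print(concentrated(K_flat))    # True  -> discard this embedding
print(concentrated(K_varied))  # False -> keep for evaluation
```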
Time series forecasting is an important problem with many real-world applications. Ensembles of deep neural networks have recently achieved impressive forecasting accuracy, but such large ensembles are impractical in many real-world settings. Transformer models have been successfully applied to a diverse set of challenging problems. We propose a novel adaptation of the original Transformer architecture for the task of time series forecasting, called Persistence Initialization. The model is initialized as a naive persistence model by using a multiplicative gating mechanism combined with a residual skip connection. We use a decoder Transformer with ReZero normalization and Rotary positional encodings, but the adaptation is applicable to any auto-regressive neural network model. We evaluate our proposed architecture on the challenging M4 dataset, achieving competitive performance compared to ensemble-based methods. We also compare against recently proposed Transformer models for time series forecasting, showing superior performance on the M4 dataset. Extensive ablation studies show that Persistence Initialization leads to better performance and faster convergence. As model size increases, only the models with our proposed adaptation gain in performance. We also perform an additional ablation study on the choice of normalization and positional encoding, and find that the use of Rotary encodings and ReZero normalization is essential for good forecasting performance.
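The mechanism is easy to state in code: a residual skip from the last observation plus a multiplicatively gated model output, with the gate initialized to zero so the model starts as naive persistence. Below is a minimal PyTorch sketch in which a linear layer stands in for the decoder Transformer.

```python
import torch
import torch.nn as nn

class PersistenceInit(nn.Module):
    """Wrap any autoregressive model so that, at initialization, the
    forecast equals the last observed value (naive persistence)."""
    def __init__(self, model: nn.Module):
        super().__init__()
        self.model = model
        self.gamma = nn.Parameter(torch.zeros(1))  # gate starts closed

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time). Residual skip from the last observation plus
        # a multiplicatively gated model contribution.
        last = x[:, -1:]
        return last + self.gamma * self.model(x)

net = PersistenceInit(nn.Linear(24, 1))  # stand-in for a decoder Transformer
window = torch.randn(16, 24)
forecast = net(window)
assert torch.allclose(forecast, window[:, -1:])  # persistence at init
```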
State-of-the-art deep learning models are often trained with large amounts of costly labeled data. However, the need for exhaustive manual annotation may limit a model's generalizability in the limited-label regime. Semi-supervised learning and unsupervised learning offer promising paradigms for learning from an abundance of unlabeled visual data. Recent progress in these paradigms has demonstrated the strong benefits of leveraging unlabeled data to improve model generalization and provide better model initialization. In this survey, we review recent advanced deep learning algorithms for semi-supervised learning (SSL) and unsupervised learning (UL) from a unified perspective. To offer a holistic understanding of the state of the art in these areas, we propose a unified taxonomy. We categorize existing representative SSL and UL methods with comprehensive and insightful analysis, highlighting their design rationales in different learning scenarios and applications across computer vision tasks. Lastly, we discuss emerging trends and open challenges in SSL and UL to shed light on critical future research directions.